From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers. (arXiv:2107.07999v5 [cs.LG] UPDATED)
In this paper we provide, to the best of our knowledge, the first comprehensive approach for incorporating various masking mechanisms into Transformer architectures in a scalable way. We show that recent results on linear causal attention (Choromanski et al., 2021) and log-linear RPE-attention (Luo et al., 2021) are special cases of this general mechanism. However, by casting the problem as a topological (graph-based) modulation of unmasked attention, we obtain several previously unknown results, including efficient d-dimensional RPE-masking and graph-kernel masking. We leverage many mathematical techniques, ranging from spectral analysis through dynamic programming and random walks to new algorithms for solving Markov processes on graphs. We provide a corresponding empirical evaluation.
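As a rough illustration of the structure being exploited (a sketch, not code from the paper): in the 1D RPE setting, the mask is a Toeplitz matrix M[i, j] = t[i - j], so applying it to a vector reduces to a convolution and can be done in O(n log n) with the FFT, which is where the log-linear complexity referenced above comes from. The function and variable names below (toeplitz_matvec, col, row) are illustrative only.

```python
import numpy as np

def toeplitz_matvec(col, row, v):
    """Multiply the n x n Toeplitz mask M, with M[i, j] = t[i - j], by a vector v
    in O(n log n) via circulant embedding and the FFT.

    col[k] = t[k]   -- first column of M (k = 0..n-1)
    row[k] = t[-k]  -- first row of M    (k = 0..n-1, row[0] == col[0])
    """
    n = len(v)
    # Embed M in the top-left block of a 2n x 2n circulant matrix whose
    # first column is [col, 0, reversed(row[1:])].
    c = np.concatenate([col, [0.0], row[1:][::-1]])
    v_pad = np.concatenate([v, np.zeros(n)])
    # A circulant matrix acts as a circular convolution, so it is diagonalized
    # by the FFT; take the first n entries of the product.
    out = np.fft.ifft(np.fft.fft(c) * np.fft.fft(v_pad)).real
    return out[:n]

# Quick check against the dense Toeplitz matrix.
n = 6
t = np.random.randn(2 * n - 1)           # t[k] for k = -(n-1)..n-1, offset by n-1
col = t[n - 1:]                          # t[0], t[1], ..., t[n-1]
row = t[n - 1::-1]                       # t[0], t[-1], ..., t[-(n-1)]
M = np.array([[t[n - 1 + i - j] for j in range(n)] for i in range(n)])
v = np.random.randn(n)
assert np.allclose(M @ v, toeplitz_matvec(col, row, v))
```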